What major ethical issues affect empirical research on law?
In this section of their guide to research ethics Mark Israel and Iain Hay (Flinders University, Australia) consider research practices around informed consent, confidentiality, harms and benefits, research integrity and researchers’ relationships.
Informed consent
Most professional and institutional, national and international guidelines and ethical codes for research demand that, other than in exceptional circumstances, participants agree to research before it commences. That consent should be both informed and voluntary.
In most circumstances researchers must provide potential participants with information about the purpose, methods, demands, risks, inconveniences, discomforts and possible outcomes of the research, including whether and how the research results might be disseminated. What is going to happen to them and why? How long will it take? What are the risks? What are the potential benefits? Who is funding the work?
In some cases providing information to ensure informed consent may take considerable time and effort for both researchers and research participants. In other cases it may be sufficient to provide potential participants with a list of their entitlements and a range of information they can request. Researchers are generally expected to record participants’ agreement to take part.
Generally researchers have to negotiate consent from all relevant people (and organisations, groups or community elders) for all relevant matters, and, possibly, at all relevant times. Several researchers have argued that consent should be dynamic and continuous and not limited to the beginning of the research project.
Faden & Beauchamp depicted informed consent as an autonomous action, committed intentionally, with understanding, and without controlling influences resulting either from coercion or manipulation by others or from psychiatric disorders. However, researchers may find it difficult to assess whether a potential participant’s circumstances allow them such freedom. In consequence, special procedures are often adopted when attempting to obtain consent or assent from vulnerable and dependent groups.
The complexities of informed consent have proved particularly problematic for researchers engaged in covert research or deception. Deception could compromise both the informed and voluntary nature of consent, but some researchers have argued that consent need not be obtained where any harm caused by lack of consent might be outweighed by the public benefit obtained. In addition, it might be impossible to gain access to some participants if other people are not deceived.
Researchers have also had difficulty with the ethics review process when institutionally standardised consent processes have been imposed mandating excessively formal information sheets or signed consent forms. This might jeopardise the safety and autonomy of research participants and the quality of the research, as well as the integrity of the consent process itself.
The Wichita jury study
In 1954 Chicago social scientists covertly recorded the discussions of six Wichita federal district court civil juries (Broeder, 1958). The study had the consent of judges, prosecution and defence lawyers, but not of the jurors. Although the chief judge of the United States Court of Appeals (Tenth Judicial District) had initially required the researchers to inform jurors of the research, he agreed to waive this restriction in a limited number of cases.
When the research project became public knowledge, it drew a storm of protest and led to an investigation by the Internal Security Subcommittee of the Senate Committee on the Judiciary, chaired by a Democrat senator, James O Eastland. The project allowed conservatives to attack the role of social research. Eastland saw the project as posing a potential threat to the integrity of the jury system and therefore as an attack on the internal security of the United States. According to Vaughan (1967), Eastland’s Committee “apparently seriously entertained the possibility that a communist plot was operating through the research” (p68). Following the committee’s report the federal government and most states banned access by researchers to the jury room, a ban that still applies in most jurisdictions.
The Stanford prison experiment
In 1971 the psychologist Philip Zimbardo created a mock prison at Stanford University and recruited 24 male student volunteers as guards and prisoners. The volunteers had answered an advert in a local student newspaper, and completed informed consent forms “indicating that some of their basic civil rights would be violated if they were selected for the prisoner role and that only minimally adequate diet and health care would be provided” (Zimbardo in Zimbardo et al, 1999).
The research into the effects of the institutional setting was abandoned after six days, when the guards subjected prisoners to physical and psychological abuse and many prisoners started to behave in pathological ways (Zimbardo, 1973). One psychologist who visited the experiment and whose intervention led to the end of the project described “feeling sick to my stomach by the sight of these sad boys so totally dehumanized” (Maslach in Zimbardo et al, 1999). A student who joined the experiment as a prisoner after it had been running for four days said later: “It was a prison to me. I don’t regard it as an experiment or simulation. It was a prison run by psychologists instead of run by the state” (quoted by Maslach in Zimbardo et al, 1999).

Zimbardo (Zimbardo et al, 1999) acknowledged that the research had been “unethical because people suffered and others were allowed to inflict pain and humiliation” well beyond the point at which the experiment should have been called off. However, he also argued that there was no deception, because there had been consent.
While there may have been informed consent at the beginning of the experiment, it is not obvious that this consent continued throughout the experiment. Although five ‘prisoners’ were released before the end, this only occurred after one had had “an emotional breakdown”, three had “acted crazy” and another had broken out in a full body rash. Others may have wanted to leave, but there is some evidence that they may have believed that they could not. At one point one prisoner told the others that they would not be allowed to quit the experiment. Zimbardo described this as untrue, yet recognised that “shock waves” from the prisoner’s claim “reverberated through all the prisoners” and substantially altered their subsequent behaviour.
Confidentiality
When people allow researchers to investigate them, they often negotiate terms for the agreement. Participants in research may, for example, consent on the basis that the information obtained about them will be used only by the researchers and only in particular ways. The information is private and is voluntarily offered to the researcher in confidence in exchange for possibly not very much direct benefit. While social science research participants might be hurt by insensitive data collection, often a more significant danger is posed by what happens to data after it has been collected.
In some research projects negotiations around confidentiality may be fairly straightforward. Some researchers operate in relatively predictable contexts, where standardised assurances may be included in a covering letter with a questionnaire. However, other work takes place in informal and unpredictable environments, where agreements need to be negotiated with individuals and groups and renegotiated during the course of lengthy fieldwork. A further complication may arise if the participant has commercial interests to protect, and the resources and expertise to ensure that these protections are stipulated in any agreement.
Obligations of confidentiality cannot be considered absolute, and in some situations – such as when researchers uncover gross injustice – they should contemplate disclosing to a particular person or group information received under an implied or explicit assurance of confidentiality.
While not every research participant may want to be offered or even warrant receiving assurances of confidentiality, most do. Social researchers have developed a range of methodological precautions in relation to collecting, analysing and storing data, as well as strategies to respond to challenges to the confidentiality of their data (Israel, 2004a). These include:
- not recording names and other data at all, or removing names and identifying details of sources from confidential data at the earliest possible stage
- disguising the name of the community where the research took place
- masking or altering data
- sending files out of the jurisdiction
- avoiding using the mail or telephone system, so that data cannot be intercepted or seized by police or intelligence agencies
Nevertheless, there are examples where British researchers have failed to hold data securely.
Some Canadian, Australian and American researchers may receive statutory protection for their data. Recognising that full confidentiality may not be assured, some Canadian and Australian research ethics committees have required researchers to offer only limited assurances of confidentiality, indicating to participants that they could be forced to hand data over to courts. This practice has been criticised as undermining the relationship of trust between researcher and participant. Nevertheless, several criminologists have indicated that they would breach confidentiality to protect vulnerable groups such as children, or to protect the security of correctional institutions.
Confidentiality and anonymity in socio-legal research
Leading socio-legal journals contain few references to beneficence, confidentiality or informed consent in the context of research ethics. However, the issues can be complex. Take two examples from the American Law and Society Review. Danet et al (1980) sought to tape conversations between Boston lawyers and their clients. Concerns were raised that taping interviews might negate attorney-client privilege, as it might mean that the conversations were no longer made in confidence. This might mean that the tapes could be admitted against the client in court. Among other options, the researchers considered asking potential adversaries to waive their rights to the tapes. However, they soon realised that “lawyers would not even want to admit to the other side that tape recordings of privileged conversations existed” (p910). In the event, no lawyer that the researchers approached proved willing or able to recruit clients for the study.
In some cases, when dealing with elite groups such as judges or information that can be linked to publicly available data, identities may be easily discerned. Liu’s (2006) study of elite corporate lawyers in China involved interviews in six elite firms in Beijing. While interviewees were granted anonymity (p758) and the names of the firms omitted, one might imagine that members of the local profession might be able to identify a firm of 25 partners with 150 staff, founded in 1989 and restructured in 1992, working with foreign direct investments and securities. Liu’s decision is not necessarily inappropriate, and it certainly is not unusual. However, in an environment where researchers can find it difficult to gain access to corporate and other elites, socio-legal scholars need to initiate discussion of how they negotiate confidentiality and the degree to which they can secure it.
Harms and benefits
Contemporary researchers are normally expected to minimise risks of harm or discomfort to participants in research projects (the principle of nonmaleficence). Although harm is most often thought of in physical terms, it also includes psychological, social and economic damage – concepts all too familiar to tort lawyers.
Researchers should try to avoid imposing even the risk of harm on others. Of course, most research involves some risk, generally at a level no greater in magnitude than the minimal risk we tend to encounter in our everyday lives (Freedman, Fuks & Weijer, 1993). The extent to which researchers must avoid risks may depend on the likelihood of the risk (prevalence) as well as the weight of the consequences that may flow from it (magnitude). Researchers may adopt risk minimisation strategies which might involve monitoring participants, maintaining a safety net of professionals who can provide support in emergencies, excluding vulnerable individuals or groups from participation where justifiable, considering whether lower risk alternatives might be available, and anticipating and counteracting any distortion of research results that might act to the detriment of research participants. One way of responding to the possibility of harming participants is by involving members of those communities who form the focus of the work in the planning and running of the research.
In some circumstances researchers may also be expected to promote the wellbeing of participants or to maximise the benefits to society as a whole (the principle of beneficence). For example, in the case of domestic violence research, studies could provide emotional and practical support for victims, offer information about and organise access to formal and informal services, provide feedback to the study community and relevant agencies, and support or engage in advocacy on behalf of the abused.
Even research that yields obvious benefits may have costs. It is likely to consume resources such as the time and salary of the researcher, or the time of participants. It may also have negative consequences, causing various harms. In general, obligations to do no harm override obligations to do good. However, there may be circumstances where this may not be the case, such as on those occasions where we might produce a major benefit while only inflicting a minor harm.
It may be tempting to over-generalise obligations of beneficence and nonmaleficence on the basis of principles developed to meet the needs of medical research. Indeed, several ethical codes do.
However, research undertaken in the social sciences may quite legitimately and deliberately work to the detriment of research subjects by revealing and critiquing their role in causing “fundamental economic, political or cultural disadvantage or exploitation” (ESRC Research Ethics Framework). Researchers uncovering corruption, violence or pollution need not work to minimise harm to the corporate or institutional entities responsible for the damage, though, as far as the ESRC is concerned, they might be expected to minimise any personal harm. As the Canadian guidance acknowledges: “Such research should not be blocked through the use of harms/benefits analysis”.
Research integrity
Researchers owe a professional obligation to their colleagues to conduct themselves honestly and with integrity. This covers both matters relating to a researcher’s own work and his or her colleagues’ scholarship – intellectual honesty in proposing, performing and reporting research, accuracy in representing contributions to research proposals and reports, fairness in peer review and collegiality in scientific interactions, including communications and sharing of resources.
In 2000 the United States Office of Science and Technology Policy published the Federal policy on research misconduct. The policy defines research misconduct in terms of fabrication, falsification and plagiarism (or ‘ffp’):
- fabrication – making up data or results and recording or reporting them
- falsification – manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record
- plagiarism – the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit
In 2006 the British government, research and higher education funding councils and the pharmaceutical industry established the UK Research Integrity Office. UKRIO will provide support and advice to whistleblowers, and develop institutional codes of good practice. Currently its work only extends to health and biomedical sciences, and it has decided not to examine specific cases of misconduct.
Researchers face enormous pressures to publish or, at least to look like they are publishing, as they struggle to obtain grants or jobs. One consequence has been that problems arise over the attribution of authorship, either because someone who has insignificant involvement has been added – gift authorship – or because junior staff who made significant contributions have been omitted – ghost authorship.
The International Committee of Medical Journal Editors (2001), under the Vancouver Protocol, requires the following three conditions to be met if someone is to be included as an author:
- substantial contribution to conception and design, or acquisition of data, or analysis and interpretation of data
- drafting the article or revising it critically for important intellectual content
- final approval of the version to be published
Conflicts of interest occur in social science when researchers have coexisting personal, financial, political and academic interests, and the potential exists for one interest to be favoured over another that has equal or even greater legitimacy, in a way that might make other reasonable people feel misled or deceived. Researchers risk appearing negligent, incompetent or deceptive.
Such conflicts have been best explored in the biomedical literature, where academics obtaining financial benefit from industry through research funding, consultancies, royalties or by holding shares in companies are more likely to reach conclusions that favour their corporate sponsor. On some occasions they have conducted research of lower quality and less open to peer review.
Although social scientists may be less likely to have a financial stake in their research area, they may still have to negotiate financial or contractual relationships with corporations or government agencies. So, should they accept contracts where clients hold a veto over publication, disclose corporate or government affiliations when advising the public or publishing research, assess grant applications from commercial competitors? Many research institutions are developing enterprise cultures which make such conflicts of interest more likely.
Qualitative researchers often use ‘conflict of interest’ to describe role conflicts – where their relationships with research participants involve multiple roles as researchers as well as perhaps as teachers, clinicians, activists, colleagues or friends. This can occur wherever researchers are embedded as insiders in the research site, notably in action research. In such circumstances it may be particularly difficult to negotiate informed consent, guard confidentiality, avoid harm and convince research ethics committees that the research relationship has not been exploitative.
Institutional conflicts of interest may influence the governance and conduct of research. Some ethically acceptable research proposals might be blocked in the ethics review process because of, for example, a desire by the reviewing institution to avoid legal action. Commercial relationships maintained by research institutions can also place individual researchers in invidious positions – even if individual researchers are not directly compromised by their home institution’s corporate relationships, they could be influenced by the knowledge that their own institution’s financial health may be affected by the results of their research or, at least, be seen to be influenced.
Researchers’ relationships
While most work on research ethics is based on universal notions of justice, since the late 1970s feminist writers such as Gilligan (1982), Baier (1985) and Noddings (2003) have elaborated an ethics of care. For such writers people develop and act as moral agents embedded in social relationships based on care, compassion and willingness both to listen, include and support those people with whom one has significant relationships. An ethics of care has obvious implications for ethics in research. Among other things it forces us to think about a broad range of relationships that fall well outside those with research participants and the academy, the traditional focus for most codes of research ethics.
Social scientists sometimes work in teams, and senior researchers may have supervisory responsibility for junior colleagues. Team leaders have responsibility for the ethical behaviour of members of their staff and for ensuring that team members are appropriately briefed “about the purpose, methods and intended possible uses of the research, what their participation in the research entails and what risks, if any, are involved” (ESRC Research Ethics Framework). Team leaders must also ensure the physical safety and emotional wellbeing of their staff. Some projects require members of the research team to deal repeatedly with subject matter that might have a traumatic effect on researchers (see, for example, Campbell, 2002).
As pressures increase on academics to find external funding for their research, and as the centre of academic enterprise has moved from a humanities and social science core to “an entrepreneurial periphery” (Slaughter & Leslie, 1997 p208), many university-based social scientists may find themselves working for clients. As employees or consultants, researchers and their institutions may be bound by secrecy or ‘commercial in confidence’ agreements. They may be questioned about the propriety of accepting money from a particular source, be it American counter-insurgency programs or tobacco companies (THES, 2000), and find themselves increasingly vulnerable to charges of conflicts of interest, or having their own interpretation of the need to minimise harm and maximise benefits to participants challenged by colleagues and sponsors.
Last Modified: 4 June 2010